This notebook introduces the SegmentationEvaluator in metriculous, which visualises and compares the results of models on image segmentation tasks. It builds on the concepts introduced in quickstart.ipynb.
# automatically show plots inside the notebook
%matplotlib inline
# reload all modules before executing code
%load_ext autoreload
%autoreload 2
import numpy as np
np.random.seed(42)
SegmentationEvaluator

For this usage example we will use randomly generated images and masks to compare three models.
image_size = [2, 256, 256]
num_classes = 3
class_names = ["dog", "tree", "cat"]
We assume that the labels range from 0 to num_classes - 1 and correspond, in order, to the entries of the class_names list. In this example the labelling is as follows:
| Label | Class |
|---|---|
| 0 | dog |
| 1 | tree |
| 2 | cat |
ground_truth = np.random.choice([0, 0, 1, 1, 2], size=image_size)
ground_truth.shape
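As a quick sanity check (not part of the metriculous API), we can confirm that the generated ground truth has the expected shape and contains only the valid labels:

```python
import numpy as np

np.random.seed(42)
image_size = [2, 256, 256]
ground_truth = np.random.choice([0, 0, 1, 1, 2], size=image_size)

# One label per pixel for each of the two images
assert ground_truth.shape == (2, 256, 256)

# Only the labels 0 (dog), 1 (tree), and 2 (cat) should appear
assert set(np.unique(ground_truth)) <= {0, 1, 2}
```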
For the purpose of this demonstration, we will generate the predictions of the three models randomly. In a real use case, the predictions would come from the models you want to compare and would have the same shape as ground_truth.
num_models = 3
models = []
for i in range(num_models):
    models.append({
        "name": f"Model {i + 1}",
        "predictions": np.random.choice([0, 1, 1, 2, 2, 2], size=image_size),
    })
import metriculous
metriculous.Comparator(
metriculous.evaluators.SegmentationEvaluator(
num_classes=num_classes,
class_names=class_names,
class_weights=[0.11, 0.54, 0.35]
),
).compare(
ground_truth=ground_truth,
model_predictions=[model["predictions"] for model in models],
model_names=[model["name"] for model in models],
).display()
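To build intuition for the class_weights argument, here is a rough numpy sketch of a class-weighted pixel accuracy. This is an illustration of the general idea of weighting per-class scores, not necessarily the exact metric metriculous computes internally; the helper name is hypothetical.

```python
import numpy as np

def weighted_pixel_accuracy(ground_truth, predictions, class_weights):
    """Per-class recall averaged with the given class weights.

    Hypothetical helper for illustration only; metriculous computes
    its own metrics internally.
    """
    score = 0.0
    for label, weight in enumerate(class_weights):
        mask = ground_truth == label
        # Fraction of this class's ground-truth pixels predicted correctly
        score += weight * np.mean(predictions[mask] == label)
    return score

np.random.seed(0)
gt = np.random.choice([0, 1, 2], size=(64, 64))
pred = np.random.choice([0, 1, 2], size=(64, 64))
print(weighted_pixel_accuracy(gt, pred, [0.11, 0.54, 0.35]))
```

With weights summing to 1, a perfect prediction scores exactly 1.0, and mistakes on the heavily weighted "tree" class (weight 0.54) lower the score more than mistakes on "dog" (weight 0.11).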